Modeling Semantics with Gated Graph Neural Networks for Knowledge Base Question Answering
Most approaches to Knowledge Base Question Answering are based on
semantic parsing. In this paper, we address the problem of learning vector
representations for complex semantic parses that consist of multiple entities
and relations. Previous work largely focused on selecting the correct semantic
relations for a question and disregarded the structure of the semantic parse:
the connections between entities and the directions of the relations. We
propose to use Gated Graph Neural Networks to encode the graph structure of the
semantic parse. We show on two data sets that the graph networks outperform all
baseline models that do not explicitly model the structure. The error analysis
confirms that our approach can successfully process complex semantic parses. Comment: Accepted as COLING 2018 Long Paper, 12 pages.
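The gated update at the heart of such a graph encoder can be sketched as follows. This is a minimal NumPy sketch of a standard Gated Graph Neural Network propagation step, not the paper's exact parameterization; all weight-matrix names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, A, W_msg, W_z, U_z, W_r, U_r, W_h, U_h):
    # One Gated Graph Neural Network propagation step: every node
    # aggregates the transformed states of its graph neighbours
    # (adjacency matrix A), then updates its own state with a
    # GRU-style gating mechanism.
    m = A @ (h @ W_msg)                       # messages from neighbours
    z = sigmoid(m @ W_z + h @ U_z)            # update gate
    r = sigmoid(m @ W_r + h @ U_r)            # reset gate
    h_cand = np.tanh(m @ W_h + (r * h) @ U_h) # candidate state
    return (1.0 - z) * h + z * h_cand
```

On a semantic parse, the nodes would be the question variable and the grounded entities, and A would encode the (directed) relation edges between them; stacking several such steps lets each node's state absorb the structure of the whole parse.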
Mixing Context Granularities for Improved Entity Linking on Question Answering Data across Entity Categories
The first stage of every knowledge base question answering approach is to
link entities in the input question. We investigate entity linking in the
context of a question answering task and present a jointly optimized neural
architecture for entity mention detection and entity disambiguation that models
the surrounding context on different levels of granularity. We use the Wikidata
knowledge base and available question answering datasets to create benchmarks
for entity linking on question answering data. Our approach outperforms the
previous state-of-the-art system on this data, resulting in an average 8%
improvement of the final score. We further demonstrate that our model delivers
a strong performance across different entity categories. Comment: Accepted as *SEM 2018 Long Paper (co-located with NAACL 2018), 9 pages.
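The core idea of scoring candidate entities against context at several granularities can be illustrated with a deliberately simplified sketch. The real model is a jointly trained neural architecture; here the mention-level and sentence-level context vectors, the bilinear scoring matrix W, and the candidate ids are all hypothetical stand-ins:

```python
import numpy as np

def link_entity(mention_vec, sentence_vec, candidates, W):
    # Score each candidate entity against the context modelled at two
    # granularities (the mention itself and the full sentence), then
    # return the best-scoring candidate id. A bilinear score replaces
    # the learned neural scorer for brevity.
    ctx = np.concatenate([mention_vec, sentence_vec])
    scores = {eid: float(ctx @ W @ vec) for eid, vec in candidates.items()}
    return max(scores, key=scores.get)
```

A usage example: with a mention vector [1, 0], a sentence vector [0, 1], and two candidate embeddings, the function simply picks the candidate whose embedding aligns best with the combined context under W.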
Deep Laser Cooling of Thulium Atoms to Sub-K Temperatures in Magneto-Optical Trap
Deep laser cooling of atoms, ions, and molecules facilitates the study of
fundamental physics as well as applied research. In this work, we report on the
narrow-line laser cooling of thulium atoms at the wavelength of  with the natural linewidth of , which widens the limits of control over atomic cloud parameters. Temperatures of about , a phase-space density of up to , and a number of trapped atoms of  were achieved. We have also demonstrated the formation of a double cloud structure in an optical lattice by adjusting the parameters of the magneto-optical trap. These results can be used to improve experiments with BEC, atomic interferometers, and optical clocks. Comment: 12 pages, 6 figures.
Message Passing for Complex Question Answering over Knowledge Graphs
Question answering over knowledge graphs (KGQA) has evolved from simple
single-fact questions to complex questions that require graph traversal and
aggregation. We propose a novel approach for complex KGQA that uses
unsupervised message passing, which propagates confidence scores obtained by
parsing an input question and matching terms in the knowledge graph to a set of
possible answers. First, we identify entity, relationship, and class names
mentioned in a natural language question, and map these to their counterparts
in the graph. Then, the confidence scores of these mappings propagate through
the graph structure to locate the answer entities. Finally, these are
aggregated depending on the identified question type. This approach can be
efficiently implemented as a series of sparse matrix multiplications mimicking
joins over small local subgraphs. Our evaluation results show that the proposed
approach outperforms the state-of-the-art on the LC-QuAD benchmark. Moreover,
we show that the performance of the approach depends only on the quality of the
question interpretation results, i.e., given a correct relevance score
distribution, our approach always produces a correct answer ranking. Our error
analysis reveals correct answers missing from the benchmark dataset and
inconsistencies in the DBpedia knowledge graph. Finally, we provide a
comprehensive evaluation of the proposed approach accompanied with an ablation
study and an error analysis, which showcase the pitfalls for each of the
question answering components in more detail. Comment: Accepted at CIKM 2019.
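The propagation step described above can be sketched in a few lines. This is a hedged simplification: entity indices, relation names, and scores are illustrative, and dense NumPy arrays stand in for the sparse matrices the approach actually relies on:

```python
import numpy as np

def propagate(entity_scores, rel_adjacency, rel_scores):
    # Propagate entity-match confidence scores through the graph: for each
    # candidate relation, one matrix multiplication "joins" the matched
    # entities with their graph neighbours, weighted by that relation's
    # own confidence. Answer candidates accumulate the propagated mass.
    # (Dense arrays for brevity; in practice these products are sparse
    # and touch only small local subgraphs.)
    answer_scores = np.zeros_like(entity_scores)
    for rel, A in rel_adjacency.items():
        answer_scores += rel_scores.get(rel, 0.0) * (A @ entity_scores)
    return answer_scores
```

For example, if the question matches entity 0 with confidence 1.0 and relation "r1" with confidence 0.9, and the graph contains an "r1" edge from entity 0 to entity 2, the score mass flows to entity 2, which becomes the top-ranked answer.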
Knowledge Graphs and Graph Neural Networks for Semantic Parsing
Human communication is inevitably grounded in the real world. Existing work on natural language processing uses structured knowledge bases to ground language expressions. The process of linking entities and relations in a text to world knowledge and composing them into a single coherent structure constitutes a semantic parsing problem. The output of a semantic parser is a grounded semantic representation of a text, which can be universally used for downstream applications, such as fact checking or question answering.
This dissertation is concerned with improving the accuracy of grounding methods and with incorporating the grounding of individual elements and the construction of the full structured representation into one unified method. We present three main contributions:
- we develop new methods to link texts to a knowledge base that integrate context information;
- we introduce Graph Neural Networks for encoding structured semantic representations;
- we explore the generalization potential of the developed knowledge-based methods and apply them to natural language understanding tasks.
For our first contribution, we investigate two tasks that focus on linking elements of a text to external knowledge: relation extraction and entity linking. Relation extraction identifies relations in a text and classifies them into one of the types in a knowledge base schema. Traditionally, relations in a sentence are processed one-by-one. Instead, we propose an approach that considers multiple relations simultaneously and improves upon the previous work.
The goal of entity linking is to find and disambiguate entity mentions in a text. A knowledge base contains millions of world entities, which span different categories from common concepts to famous people and place names. We present a new architecture for entity linking that is effective across diverse entity categories.
Our second contribution is centered on a grounded semantic parser.
Previous semantic parsing methods grounded individual elements in isolation and composed them later into a complete semantic representation. Such approaches struggle with semantic representations that include multiple grounded elements: world entities and semantic relations.
We integrate the grounding step and the construction of a full semantic representation into a single architecture.
To encode semantic representations, we adapt Gated Graph Neural Networks for this task for the first time.
Our semantic parsing methods are less prone to error propagation and are more robust when constructing semantic representations with multiple relations. We demonstrate the effectiveness of our grounded semantic parser empirically on the challenging open-domain question answering task.
In our third contribution, we cover the extrinsic evaluation of the developed methods on three applications: argumentative reasoning, fact verification, and text comprehension. We show that our methods can be successfully transferred to other tasks and datasets that they were not trained on.
End-to-end Representation Learning for Question Answering with Weak Supervision
In this paper, we present a factoid question answering system for participation in Task 4 of the QALD-7 shared task. Our system is an end-to-end neural architecture for learning a semantic representation of the input question. It iteratively generates representations and uses a convolutional neural network (CNN) model to score them at each step. We take the semantic representation with the highest final score and execute it against Wikidata to retrieve the answers. We show on the Task 4 data set that our system is able to successfully generalize to new data.
Context-Aware Representations for Knowledge Base Relation Extraction
We provide a subcorpus of Wikipedia that was annotated with Wikidata relations using a distant supervision procedure. The corpus contains two types of annotations: entities and relations. Entity annotations were extracted from the Wikipedia links in the article text. Each link was converted to a Wikidata identifier using the mappings from Wikidata itself. Additional entities were recognised with a named entity recognizer and later linked to Wikidata. For each pair of entities in each sentence, we searched for Wikidata relations connecting the pair and stored all unambiguous instances (only one relation is possible).
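The unambiguity filter in this distant supervision procedure can be sketched as follows; the knowledge-base lookup table and the Wikidata Q/P identifiers used in the example are illustrative:

```python
def annotate(entity_pairs, kb):
    # Distant supervision: for each (subject, object) pair of entities
    # found in a sentence, look up the Wikidata relations connecting
    # them and keep only the unambiguous cases, i.e. those where exactly
    # one candidate relation is possible for the pair.
    triples = []
    for subj, obj in entity_pairs:
        relations = kb.get((subj, obj), [])
        if len(relations) == 1:
            triples.append((subj, relations[0], obj))
    return triples
```

Pairs connected by several candidate relations are discarded rather than guessed, which trades recall for cleaner training labels.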
Context-Aware Representations for Knowledge Base Relation Extraction
We demonstrate that for sentence-level relation extraction it is beneficial to consider other relations in the sentential context while predicting the target relation. Our architecture uses an LSTM-based encoder to jointly learn representations for all relations in a single sentence.
We combine the context representations with an attention mechanism to make the final prediction.
Compared to a baseline system, our approach results in an average error reduction of 24% on a held-out set of relations.
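The context-plus-attention prediction step can be sketched as follows. This is a minimal NumPy stand-in for the learned architecture: the LSTM encoder is replaced by precomputed relation representations, and the attention and output weights are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def predict(target_repr, context_reprs, W_att, W_out):
    # Attend over the representations of the other relations in the same
    # sentence, blend the attention-weighted context into the target
    # relation's representation, and classify the combined vector.
    weights = softmax(context_reprs @ W_att @ target_repr)
    context = weights @ context_reprs
    logits = np.concatenate([target_repr, context]) @ W_out
    return int(np.argmax(logits))
```

The design point is that the target relation is never classified in isolation: the softmax weights decide how much each co-occurring relation in the sentence should influence the final label.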